Grouping variables in an underdetermined system for invariant object recognition
Poster presentation: Introduction. We study the problem of object recognition invariant to transformations such as translation, rotation, and scale. A system is underdetermined if its degrees of freedom (the number of possible transformations and potential objects) exceed the available information (the image size). Regularization theory solves this problem by adding constraints [1], but it is unclear what constraints biological systems use. We suggest that, rather than seeking constraints, an underdetermined system can make decisions based on the available information by grouping its variables. To demonstrate this strategy, we propose a dynamical system as a minimal system for invariant recognition. …
Activity-dependent bidirectional plasticity and homeostasis regulation governing structure formation in a model of layered visual memory
Poster presentation: Our work deals with the self-organization [1] of a memory structure that includes multiple hierarchical levels with massive recurrent communication within and between them. Such a structure has to provide a representational basis for the relevant objects to be stored and recalled in a rapid and efficient way. Assuming that the object patterns consist of many spatially distributed local features, a problem of parts-based learning is posed. We speculate on the neural mechanisms governing the process of structure formation and demonstrate their functionality on the task of human face recognition. The model we propose is based on two consecutive layers of distributed cortical modules, which in turn contain subunits receiving common afferents and bound by common lateral inhibition (Figure 1). In the initial state, the connectivity between and within the layers is homogeneous, with all types of synapses (bottom-up, lateral, and top-down) being plastic. During iterative learning, the lower layer of the system is exposed to Gabor filter-bank responses extracted at local points on the face images. Although facing an unsupervised learning problem, the system is able to develop a synaptic structure capturing local features and their relations at the lower level, as well as the global identity of the person at the higher level of processing, gradually improving its recognition performance with learning time. …
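The abstract above describes feeding the lower layer with Gabor filter-bank responses extracted at local points of face images. A minimal sketch of extracting such a local feature vector ("jet") is given below; the function names, kernel sizes, and parameter values are illustrative assumptions, not the paper's implementation, and only real-valued kernels are used as a simplification:

```python
import numpy as np

def gabor_kernel(size, wavelength, orientation, sigma=None):
    """Real part of a Gabor kernel: a plane wave windowed by a Gaussian."""
    sigma = sigma or 0.5 * wavelength
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the coordinate along the filter's orientation.
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def jet(image, point, size=9, wavelengths=(4, 8), n_orient=4):
    """Vector of Gabor response magnitudes sampled at one image location.

    Full Gabor jets use complex kernels and phase; this sketch keeps only
    the magnitude of the real-kernel response.
    """
    r, c = point
    half = size // 2
    patch = image[r - half:r + half + 1, c - half:c + half + 1]
    responses = []
    for wl in wavelengths:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            responses.append(np.abs(np.sum(patch * gabor_kernel(size, wl, theta))))
    return np.array(responses)
```

A bank over two wavelengths and four orientations yields an eight-dimensional feature vector per sampled point, one such vector per local point of the face image.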
A system for person-independent hand posture recognition against complex backgrounds
A computer vision system for person-independent recognition of hand postures against complex backgrounds is presented. The system is based on Elastic Graph Matching (EGM), which was extended to allow for combinations of different feature types at the graph nodes.
Laborious detective work: memorics as a challenge for historical scholarship
Review of: Johannes Fried: Der Schleier der Erinnerung. Grundzüge einer historischen Memorik, C.H. Beck Verlag, Munich 2004, ISBN 3406522114, 512 pages, EUR 39.90
Feature-driven Emergence of Model Graphs for Object Recognition and Categorization
An important requirement for the expression of cognitive structures
is the ability to form mental objects by rapidly binding together
constituent parts. In this sense, one may conceive the brain's data
structure to have the form of graphs whose nodes are labeled with
elementary features. These provide a versatile data format with the
additional ability to render the structure of any mental object.
Because of the multitude of possible object variations the graphs
are required to be dynamic. Upon presentation of an image a
so-called model graph should rapidly emerge by binding together
memorized subgraphs derived from earlier learning examples driven by the
image features. In this model, the richness and flexibility of the
mind is made possible by a combinatorial game of immense
complexity. Consequently, the emergence of model graphs is a
laborious task which, in computer vision, has most often been
disregarded in favor of employing model graphs tailored to specific
object categories like, for instance, faces in frontal pose.
Recognition or categorization of arbitrary objects, however, demands
dynamic graphs.
In this work we propose a form of graph dynamics, which proceeds in
two steps. In the first step component classifiers, which decide
whether a feature is present in an image, are learned from training
images. For processing arbitrary objects, features are small
localized grid graphs, so-called parquet graphs, whose nodes are
attributed with Gabor amplitudes. Through combination of these
classifiers into a linear discriminant that conforms to Linsker's
infomax principle a weighted majority voting scheme is implemented.
It allows for preselection of salient learning examples, so-called
model candidates, and likewise for preselection of categories the
object in the presented image supposably belongs to. Each model
candidate is verified in a second step using a variant of elastic
graph matching, a standard correspondence-based technique for face
and object recognition. To further differentiate between model
candidates with similar features, a candidate is selected only if its
features appear in a similar spatial arrangement in the image. Model
graphs are constructed dynamically by assembling model features into
larger graphs according to their spatial arrangement. From the
viewpoint of pattern recognition, the presented technique is a
combination of a discriminative (feature-based) and a generative
(correspondence-based) classifier while the majority voting scheme
implemented in the feature-based part is an extension of existing
multiple feature subset methods.
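The preselection step described above combines component classifiers into a weighted majority vote over stored model candidates. The following sketch shows one plausible reading of that step; the array layout, the weighting, and the function name are assumptions for illustration, not the published implementation:

```python
import numpy as np

def preselect(detected, model_features, weights, top_k=2):
    """Weighted majority voting over stored model candidates.

    detected: binary vector (n_features,), the component classifier
      outputs - 1 where a feature was found in the presented image.
    model_features: binary matrix (n_models, n_features) recording
      which features each stored model candidate contains.
    weights: per-feature vote weights (e.g. the coefficients of a
      linear discriminant); diagnostic features carry larger votes.
    Returns the indices of the top_k candidates and the raw vote tally.
    """
    # Each model collects the weighted votes of the features that are
    # both detected in the image and part of that model.
    votes = model_features @ (detected * weights)
    ranked = np.argsort(votes)[::-1]
    return list(ranked[:top_k]), votes
```

The surviving candidates would then be verified by the correspondence-based second step (elastic graph matching), which additionally checks the spatial arrangement of the features.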
We report the results of experiments on standard databases for
object recognition and categorization. The method achieved high
recognition rates on identity, object category, pose, and
illumination type. Unlike many other models the presented
technique can also cope with varying background, multiple objects,
and partial occlusion.
Binding in Models of Perception and Brain Function
The development of the issue of binding as fundamental to neural dynamics has made possible recent advances in the modeling of difficult problems of perception and brain function, among them perceptual segmentation, invariant pattern recognition, and one-shot learning. The longer-term conceptual developments that have led to this success are also reviewed.
The Correlation Theory of Brain Function
A summary of brain theory is given so far as it is contained within the framework of Localization Theory. Difficulties of this "conventional theory" are traced back to a specific deficiency: there is no way to express relations between active cells (such as their representing parts of the same object). A new theory is proposed to cure this deficiency. It introduces a new kind of dynamical control, termed synaptic modulation, according to which synapses switch between a conducting and a non-conducting state. The dynamics of this variable is controlled on a fast time scale by correlations in the temporal fine structure of cellular signals. Furthermore, conventional synaptic plasticity is replaced by a refined version. Synaptic modulation and plasticity form the basis for short-term and long-term memory, respectively. Signal correlations, shaped by the variable network, express structure and relationships within objects. In particular, the figure-ground problem may be solved in this way. Synaptic modulation introduces into cerebral networks the flexibility necessary to solve the invariance problem. Since momentarily useless connections are deactivated, interference between different memory traces can be reduced, and memory capacity increased, in comparison with conventional associative memory.
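A toy sketch of the synaptic-modulation idea described above: a synapse's conducting state is driven toward 1 or 0 depending on the correlation of the temporal fine structure of the pre- and postsynaptic signals. The update rule, time constant, and threshold here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def update_modulation(a, pre, post, tau=5.0, threshold=0.2, dt=1.0):
    """One fast-time-scale update of a synapse's modulation state.

    a: current modulation state in [0, 1] (near 1 = conducting,
       near 0 = non-conducting).
    pre, post: recent signal traces (T samples) of the pre- and
       postsynaptic cells.
    Correlated fine structure pulls a toward 1; uncorrelated or
    anticorrelated activity pulls it toward 0.
    """
    # Zero-mean normalized correlation of the two traces.
    p = pre - pre.mean()
    q = post - post.mean()
    denom = np.sqrt((p**2).sum() * (q**2).sum()) + 1e-12
    corr = (p * q).sum() / denom
    target = 1.0 if corr > threshold else 0.0
    # Relax toward the target on the fast modulation time scale.
    return a + (dt / tau) * (target - a)
```

Applied across a network, such a rule deactivates momentarily useless connections, which is the mechanism the abstract invokes to reduce interference between memory traces.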
The What and Why of Binding: The Modeler's Perspective
In attempts to formulate a computational understanding of brain function,
one of the fundamental concerns is the data structure by which the brain
represents information. For many decades, a conceptual framework has
dominated the thinking of both brain modelers and neurobiologists. That
framework is referred to here as "classical neural networks." It is well
supported by experimental data, although it may be incomplete. A
characterization of this framework will be offered in the next section.
Difficulties in modeling important functional aspects of the brain on the
basis of classical neural networks alone have led to the recognition that
another, general mechanism must be invoked to explain brain function. That
mechanism I call "binding." Binding by neural signal synchrony had been
mentioned several times in the literature (Legéndy, 1970; Milner, 1974)
before it was fully formulated as a general phenomenon (von der Malsburg,
1981). Although experimental evidence for neural synchrony was soon found,
the idea was largely ignored for many years. Only recently has it become a
topic of animated discussion. In what follows, I will summarize the nature
and the roots of the idea of binding, especially of temporal binding, and
will discuss some of the objections raised against it.